The Modern History of Computing




First published Mon Dec 18, 2000; substantive revision Fri Jun 9, 2006

Historically, computers were human clerks who calculated in accordance with effective methods. These human computers did the sorts of calculation nowadays carried out by electronic computers, and many thousands of them were employed in commerce, government, and research establishments. The term computing machine, used increasingly from the 1920s, refers to any machine that does the work of a human computer, i.e., any machine that calculates in accordance with effective methods. During the late 1940s and early 1950s, with the advent of electronic computing machines, the phrase ‘computing machine’ gradually gave way simply to ‘computer’, initially usually with the prefix ‘electronic’ or ‘digital’. This entry surveys the history of these machines.

Babbage

Charles Babbage was Lucasian Professor of Mathematics at Cambridge University from 1828 to 1839 (a post formerly held by Isaac Newton). Babbage's proposed Difference Engine was a special-purpose digital computing machine for the automatic production of mathematical tables (such as logarithm tables, tide tables, and astronomical tables). The Difference Engine consisted entirely of mechanical components — brass gear wheels, rods, ratchets, pinions, etc. Numbers were represented in the decimal system by the positions of 10-toothed metal wheels mounted in columns. Babbage exhibited a small working model in 1822. He never completed the full-scale machine that he had designed but did complete several fragments. The largest — one ninth of the complete calculator — is on display in the London Science Museum. Babbage used it to perform serious computational work, calculating various mathematical tables. In 1991, Babbage's Difference Engine No. 2 was finally built from Babbage's designs and is also on display at the London Science Museum.

The Swedes Georg and Edvard Scheutz (father and son) constructed a modified version of Babbage's Difference Engine. Three were made, a prototype and two commercial models, one of these being sold to an observatory in Albany, New York, and the other to the Registrar-General's office in London, where it calculated and printed actuarial tables.

Babbage's proposed Analytical Engine, considerably more ambitious than the Difference Engine, was to have been a general-purpose mechanical digital computer. The Analytical Engine was to have had a memory store and a central processing unit (or ‘mill’) and would have been able to select from among alternative actions consequent upon the outcome of its previous actions (a facility nowadays known as conditional branching). The behaviour of the Analytical Engine would have been controlled by a program of instructions contained on punched cards connected together with ribbons (an idea that Babbage had adopted from the Jacquard weaving loom). Babbage emphasised the generality of the Analytical Engine, saying ‘the conditions which enable a finite machine to make calculations of unlimited extent are fulfilled in the Analytical Engine’ (Babbage [1994], p. 97).

Babbage worked closely with Ada Lovelace, daughter of the poet Byron, after whom the modern programming language Ada is named. Lovelace foresaw the possibility of using the Analytical Engine for non-numeric computation, suggesting that the Engine might even be capable of composing elaborate pieces of music.

A large model of the Analytical Engine was under construction at the time of Babbage's death in 1871 but a full-scale version was never built. Babbage's idea of a general-purpose calculating engine was never forgotten, especially at Cambridge, and was on occasion a lively topic of mealtime discussion at the war-time headquarters of the Government Code and Cypher School, Bletchley Park, Buckinghamshire, birthplace of the electronic digital computer.

Analog computers

The earliest computing machines in wide use were not digital but analog. In analog representation, properties of the representational medium ape (or reflect or model) properties of the represented state-of-affairs. (In obvious contrast, the strings of binary digits employed in digital representation do not represent by means of possessing some physical property — such as length — whose magnitude varies in proportion to the magnitude of the property that is being represented.) Analog representations form a diverse class. Some examples: the longer a line on a road map, the longer the road that the line represents; the greater the number of clear plastic squares in an architect's model, the greater the number of windows in the building represented; the higher the pitch of an acoustic depth meter, the shallower the water. In analog computers, numerical quantities are represented by, for example, the angle of rotation of a shaft or a difference in electrical potential. Thus the output voltage of the machine at a time might represent the momentary speed of the object being modelled.

As the case of the architect's model makes plain, analog representation may be discrete in nature (there is no such thing as a fractional number of windows). Among computer scientists, the term ‘analog’ is sometimes used narrowly, to indicate representation of one continuously-valued quantity by another (e.g., speed by voltage). As Brian Cantwell Smith has remarked:

‘Analog’ should … be a predicate on a representation whose structure corresponds to that of which it represents … That continuous representations should historically have come to be called analog presumably betrays the recognition that, at the levels at which it matters to us, the world is more foundationally continuous than it is discrete. (Smith [1991], p. 271)

James Thomson, brother of Lord Kelvin, invented the mechanical wheel-and-disc integrator that became the foundation of analog computation (Thomson [1876]). The two brothers constructed a device for computing the integral of the product of two given functions, and Kelvin described (although did not construct) general-purpose analog machines for integrating linear differential equations of any order and for solving simultaneous linear equations. Kelvin's most successful analog computer was his tide predicting machine, which remained in use at the port of Liverpool until the 1960s. Mechanical analog devices based on the wheel-and-disc integrator were in use during World War I for gunnery calculations. Following the war, the design of the integrator was considerably improved by Hannibal Ford (Ford [1919]).

Stanley Fifer reports that the first semi-automatic mechanical analog computer was built in England by the Manchester firm of Metropolitan Vickers prior to 1930 (Fifer [1961], p. 29); however, I have so far been unable to verify this claim. In 1931, Vannevar Bush, working at MIT, built the differential analyser, the first large-scale automatic general-purpose mechanical analog computer. Bush's design was based on the wheel-and-disc integrator. Soon copies of his machine were in use around the world (including, at Cambridge and Manchester Universities in England, differential analysers built out of kit-set Meccano, the once popular engineering toy).

It required a skilled mechanic equipped with a lead hammer to set up Bush's mechanical differential analyser for each new job. Subsequently, Bush and his colleagues replaced the wheel-and-disc integrators and other mechanical components by electromechanical, and finally by electronic, devices.

A differential analyser may be conceptualised as a collection of ‘black boxes’ connected together in such a way as to allow considerable feedback. Each box performs a fundamental process, for example addition, multiplication of a variable by a constant, and integration. In setting up the machine for a given task, boxes are connected together so that the desired set of fundamental processes is executed. In the case of electrical machines, this was done typically by plugging wires into sockets on a patch panel (computing machines whose function is determined in this way are referred to as ‘program-controlled’).
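The black-box organisation lends itself to a simple software analogy. The sketch below is my illustration, not anything from the entry: it wires an integrator unit and a constant-multiplier unit into a feedback loop to solve dy/dt = -k*y, the kind of task an analyser was set up for. The constant k, the step size, and the unit names are all arbitrary choices.

```python
# Illustrative sketch: a differential analyser as 'black boxes' in feedback.
# Here the boxes solve dy/dt = -k*y; all parameters are invented.

def constant_multiplier(k):
    """A box that multiplies its input by a fixed constant."""
    return lambda x: k * x

def integrator(initial):
    """A box that accumulates its input over time (the wheel-and-disc role)."""
    state = {'y': initial}
    def step(dydt, dt):
        state['y'] += dydt * dt   # Euler approximation of integration
        return state['y']
    return step

# 'Patch panel': the integrator's output feeds the multiplier, whose
# output (-k*y) feeds back into the integrator.
mul = constant_multiplier(-0.5)
integ = integrator(initial=1.0)

y, dt = 1.0, 0.01
for _ in range(1000):             # simulate 10 time units
    y = integ(mul(y), dt)

print(round(y, 4))                # close to the exact solution exp(-5)
```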

Since all the boxes work in parallel, an electronic differential analyser solves sets of equations very quickly. Against this has to be set the cost of massaging the problem to be solved into the form demanded by the analog machine, and of setting up the hardware to perform the desired computation. A major drawback of analog computation is the higher cost, relative to digital machines, of an increase in precision. During the 1960s and 1970s, there was considerable interest in ‘hybrid’ machines, where an analog section is controlled by and programmed via a digital section. However, such machines are now a rarity.

The Universal Turing Machine

In 1936, at Cambridge University, Turing invented the principle of the modern computer. He described an abstract digital computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols (Turing [1936]). The actions of the scanner are dictated by a program of instructions that is stored in the memory in the form of symbols. This is Turing's stored-program concept, and implicit in it is the possibility of the machine operating on and modifying its own program. (In London in 1947, in the course of what was, so far as is known, the earliest public lecture to mention computer intelligence, Turing said, ‘What we want is a machine that can learn from experience’, adding that the ‘possibility of letting the machine alter its own instructions provides the mechanism for this’ (Turing [1947], p. 393).) Turing's computing machine of 1936 is now known simply as the universal Turing machine. Cambridge mathematician Max Newman remarked that right from the start Turing was interested in the possibility of actually building a computing machine of the sort that he had described (Newman in interview with Christopher Evans, in Evans [197?]).
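The scanner-and-tape model is easy to render in code. Below is a minimal sketch of a Turing-style machine (my illustration; the instruction table and the bit-flipping example are invented, not Turing's): a sparse dictionary stands in for the limitless tape, and the program itself is just a table of symbols, echoing the stored-program idea.

```python
# Minimal sketch of Turing's abstract machine: an unbounded tape scanned
# one square at a time, driven by a table of instructions. The example
# machine (which flips bits until it reaches a blank) is invented.

def run(program, tape, state='start', head=0, max_steps=1000):
    cells = dict(enumerate(tape))         # sparse, effectively limitless tape
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, '_')     # unwritten squares read as blank
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += {'R': 1, 'L': -1}[move]
    return ''.join(cells[i] for i in sorted(cells))

program = {
    ('start', '0'): ('1', 'R', 'start'),  # flip 0 to 1, move right
    ('start', '1'): ('0', 'R', 'start'),  # flip 1 to 0, move right
    ('start', '_'): ('_', 'R', 'halt'),   # blank: stop
}
print(run(program, '1011_'))              # -> '0100_'
```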

From the start of the Second World War Turing was a leading cryptanalyst at the Government Code and Cypher School, Bletchley Park. Here he became familiar with Thomas Flowers' work involving large-scale high-speed electronic switching (described below). However, Turing could not turn to the project of building an electronic stored-program computing machine until the cessation of hostilities in Europe in 1945.

During the wartime years Turing did give considerable thought to the question of machine intelligence. Colleagues at Bletchley Park recall numerous off-duty discussions with him on the topic, and at one point Turing circulated a typewritten report (now lost) setting out some of his ideas. One of these colleagues, Donald Michie (who later founded the Department of Machine Intelligence and Perception at the University of Edinburgh), remembers Turing talking often about the possibility of computing machines (1) learning from experience and (2) solving problems by means of searching through the space of possible solutions, guided by rule-of-thumb principles (Michie in interview with Copeland, 1995). The modern term for the latter idea is ‘heuristic search’, a heuristic being any rule-of-thumb principle that cuts down the amount of searching required in order to find a solution to a problem. At Bletchley Park Turing illustrated his ideas on machine intelligence by reference to chess. Michie recalls Turing experimenting with heuristics that later became common in chess programming (in particular minimax and best-first).
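Minimax, one of the heuristics just mentioned, scores a position by assuming each player makes the reply best for them. A minimal sketch follows (my illustration; the toy game tree and its leaf scores are invented and are not a chess position):

```python
# Illustrative minimax over a hand-made game tree. Leaves hold heuristic
# scores of end positions; inner lists are choice points for each player.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a heuristic evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 5], [2, 9], [0, 7]]          # invented scores
print(minimax(tree))                     # -> 3: the best outcome the maximizer
                                         #    can guarantee against best play
```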

Further information about Turing and the computer, including his wartime work on codebreaking and his thinking about artificial intelligence and artificial life, can be found in Copeland 2004.

Electromechanical versus Electronic Computation

With some exceptions — including Babbage's purely mechanical engines, and the finger-powered National Accounting Machine — early digital computing machines were electromechanical. That is to say, their basic components were small, electrically-driven, mechanical switches called ‘relays’. These operate relatively slowly, whereas the basic components of an electronic computer — originally vacuum tubes (valves) — have no moving parts save electrons and so operate extremely fast. Electromechanical digital computing machines were built before and during the second world war by (among others) Howard Aiken at Harvard University, George Stibitz at Bell Telephone Laboratories, Turing at Princeton University and Bletchley Park, and Konrad Zuse in Berlin. To Zuse belongs the honour of having built the first working general-purpose program-controlled digital computer. This machine, later called the Z3, was functioning in 1941. (A program-controlled computer, as opposed to a stored-program computer, is set up for a new task by re-routing wires, by means of plugs etc.)

Relays were too slow and unreliable a medium for large-scale general-purpose digital computation (although Aiken made a valiant effort). It was the development of high-speed digital techniques using vacuum tubes that made the modern computer possible.

The earliest extensive use of vacuum tubes for digital data-processing appears to have been by the engineer Thomas Flowers, working in London at the British Post Office Research Station at Dollis Hill. Electronic equipment designed by Flowers in 1934, for controlling the connections between telephone exchanges, went into operation in 1939, and involved between three and four thousand vacuum tubes running continuously. In 1938–1939 Flowers worked on an experimental electronic digital …

… in the machine. … Certain of these numbers, or 'words', are read, one after another, as orders. In one possible type of machine an order consists of four numbers, for example 11, 13, 27, 4. The number 4 signifies 'add', and when control shifts to this word the 'houses' H11 and H13 will be connected to the adder as inputs, and H27 as output. The numbers stored in H11 and H13 pass through the adder, are added, and the sum is passed on to H27. The control then shifts to the next order. In most real machines the process just described would be done by three separate orders, the first bringing [H11] (= content of H11) to a central accumulator, the second adding [H13] into the accumulator, and the third sending the result to H27; thus only one address would be required in each order. … A machine with storage, with this automatic-telephone-exchange arrangement and with the necessary adders, subtractors and so on, is, in a sense, already a universal machine. (Newman [1948], pp. 271–272)

Following this explanation of Turing's three-address concept (source 1, source 2, destination, function) Newman went on to describe program storage (‘the orders shall be in a series of houses X1, X2, …’) and conditional branching. He then summed up:

From this highly simplified account it emerges that the essential internal parts of the machine are, first, a storage for numbers (which may also be orders). … Secondly, adders, multipliers, etc. Thirdly, an 'automatic telephone exchange' for selecting 'houses', connecting them to the arithmetic organ, and writing the answers in other prescribed houses. Finally, means of moving control at any stage to any chosen order, if a certain condition is satisfied, otherwise passing to the next order in the normal sequence. Besides these there must be ways of setting up the machine at the outset, and extracting the final answer in useable form. (Newman [1948], pp. 273–4)
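Newman's example order (11, 13, 27, 4) is concrete enough to execute directly. A minimal sketch (my illustration; the stored values are invented) of both the three-address form and the one-address accumulator form he describes:

```python
# Newman's example: the order (11, 13, 27, 4) adds the contents of
# houses H11 and H13 and stores the sum in H27 (4 signifies 'add').
store = {11: 20, 13: 22, 27: 0}          # invented contents

def execute(order):
    src1, src2, dest, func = order
    if func == 4:                        # function code 4 = 'add'
        store[dest] = store[src1] + store[src2]

execute((11, 13, 27, 4))
print(store[27])                         # -> 42

# The one-address version: three orders routed through an accumulator.
acc = store[11]                          # bring [H11] to the accumulator
acc += store[13]                         # add [H13] into the accumulator
store[27] = acc                          # send the result to H27
```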

In a letter written in 1972 Williams described in some detail what he and Kilburn were told by Newman:

About the middle of the year [1946] the possibility of an appointment at Manchester University arose and I had a talk with Professor Newman who was already interested in the possibility of developing computers and had acquired a grant from the Royal Society of £30,000 for this purpose. Since he understood computers and I understood electronics the possibilities of fruitful collaboration were obvious. I remember Newman giving us a few lectures in which he outlined the organisation of a computer in terms of numbers being identified by the address of the house in which they were placed and in terms of numbers being transferred from this address, one at a time, to an accumulator where each entering number was added to what was already there. At any time the number in the accumulator could be transferred back to an assigned address in the store and the accumulator cleared for further use. The transfers were to be effected by a stored program in which a list of instructions was obeyed sequentially. Ordered progress through the list could be interrupted by a test instruction which examined the sign of the number in the accumulator. Thereafter operation started from a new point in the list of instructions. This was the first information I received about the organisation of computers. … Our first computer was the simplest embodiment of these principles, with the sole difference that it used a subtracting rather than an adding accumulator. (Letter from Williams to Randell, 1972; in Randell [1972], p. 9)

Turing's early input to the developments at Manchester, hinted at by Williams in his above-quoted reference to Turing, may have been via the lectures on computer design that Turing and Wilkinson gave in London during the period December 1946 to February 1947 (Turing and Wilkinson [1946–7]). The lectures were attended by representatives of various organisations planning to use or build an electronic computer. Kilburn was in the audience (Bowker and Giordano [1993]). (Kilburn usually said, when asked from where he obtained his basic knowledge of the computer, that he could not remember (letter from Brian Napper to Copeland, 2002); for example, in a 1992 interview he said: ‘Between early 1945 and early 1947, in that period, somehow or other I knew what a digital computer was … Where I got this knowledge from I've no idea’ (Bowker and Giordano [1993], p. 19).)

Whatever role Turing's lectures may have played in informing Kilburn, there is little doubt that credit for the Manchester computer — called the ‘Newman-Williams machine’ in a contemporary document (Huskey 1947) — belongs not only to Williams and Kilburn but also to Newman, and that the influence on Newman of Turing's 1936 paper was crucial, as was the influence of Flowers' Colossus.

The first working AI program, a draughts (checkers) player written by Christopher Strachey, ran on the Ferranti Mark I in the Manchester Computing Machine Laboratory. Strachey (at the time a teacher at Harrow School and an amateur programmer) wrote the program with Turing's encouragement and utilising the latter's recently completed Programmers' Handbook for the Ferranti. (Strachey later became Director of the Programming Research Group at Oxford University.) By the summer of 1952, the program could, Strachey reported, ‘play a complete game of draughts at a reasonable speed’. (Strachey's program formed the basis for Arthur Samuel's well-known checkers program.) The first chess-playing program, also, was written for the Manchester Ferranti, by Dietrich Prinz; the program first ran in November 1951. Designed for solving simple problems of the mate-in-two variety, the program would examine every possible move until a solution was found. Turing started to program his ‘Turochamp’ chess-player on the Ferranti Mark I, but never completed the task. Unlike Prinz's program, the Turochamp could play a complete game (when hand-simulated) and operated not by exhaustive search but under the guidance of heuristics.

ENIAC and EDVAC

The first fully functioning electronic digital computer to be built in the U.S. was ENIAC, constructed at the Moore School of Electrical Engineering, University of Pennsylvania, for the Army Ordnance Department, by J. Presper Eckert and John Mauchly. Completed in 1945, ENIAC was somewhat similar to the earlier Colossus, but considerably larger and more flexible (although far from general-purpose). The primary function for which ENIAC was designed was the calculation of tables used in aiming artillery. ENIAC was not a stored-program computer, and setting it up for a new job involved reconfiguring the machine by means of plugs and switches. For many years, ENIAC was believed to have been the first functioning electronic digital computer, Colossus being unknown to all but a few.

In 1944, John von Neumann joined the ENIAC group. He had become ‘intrigued’ (Goldstine's word, [1972], p. 275) with Turing's universal machine while Turing was at Princeton University during 1936–1938. At the Moore School, von Neumann emphasised the importance of the stored-program concept for electronic computing, including the possibility of allowing the machine to modify its own program in useful ways while running (for example, in order to control loops and branching). Turing's paper of 1936 (‘On Computable Numbers, with an Application to the Entscheidungsproblem’) was required reading for members of von Neumann's post-war computer project at the Institute for Advanced Study, Princeton University (letter from Julian Bigelow to Copeland, 2002; see also Copeland [2004], p. 23). Eckert appears to have realised independently, and prior to von Neumann's joining the ENIAC group, that the way to take full advantage of the speed at which data is processed by electronic circuits is to place suitably encoded instructions for controlling the processing in the same high-speed storage devices that hold the data itself (documented in Copeland [2004], pp. 26–7). In 1945, while ENIAC was still under construction, von Neumann produced a draft report, mentioned previously, setting out the ENIAC group's ideas for an electronic stored-program general-purpose digital computer, the EDVAC (von Neumann [1945]). The EDVAC was completed six years later, but not by its originators, who left the Moore School to build computers elsewhere. Lectures held at the Moore School in 1946 on the proposed EDVAC were widely attended and contributed greatly to the dissemination of the new ideas.

Von Neumann was a prestigious figure and he made the concept of a high-speed stored-program digital computer widely known through his writings and public addresses. As a result of his high profile in the field, it became customary, although historically inappropriate, to refer to electronic stored-program digital computers as ‘von Neumann machines’.

The Los Alamos physicist Stanley Frankel, responsible with von Neumann and others for mechanising the large-scale calculations involved in the design of the atomic bomb, has described von Neumann's view of the importance of Turing's 1936 paper, in a letter:

I know that in or about 1943 or ‘44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936 … Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the 'father of the computer' (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing, in so far as not anticipated by Babbage … Both Turing and von Neumann, of course, also made substantial contributions to the 'reduction to practice' of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. (Quoted in Randell [1972], p. 10)

Other Notable Early Computers

Other notable early stored-program electronic digital computers were:

  • EDSAC, 1949, built at Cambridge University by Maurice Wilkes
  • BINAC, 1949, built by Eckert's and Mauchly's Electronic Control Co., Philadelphia (opinions differ over whether BINAC ever actually worked)
  • Whirlwind I, 1949, Digital Computer Laboratory, Massachusetts Institute of Technology, Jay Forrester
  • SEAC, 1950, US Bureau of Standards Eastern Division, Washington D.C., Samuel Alexander, Ralph Slutz
  • SWAC, 1950, US Bureau of Standards Western Division, Institute for Numerical Analysis, University of California at Los Angeles, Harry Huskey
  • UNIVAC, 1951, Eckert-Mauchly Computer Corporation, Philadelphia (the first computer to be available commercially in the U.S.)
  • the IAS computer, 1952, Institute for Advanced Study, Princeton University, Julian Bigelow, Arthur Burks, Herman Goldstine, von Neumann, and others (thanks to von Neumann's publishing the specifications of the IAS machine, it became the model for a group of computers known as the Princeton Class machines; the IAS computer was also a strong influence on the IBM 701)
  • IBM 701, 1952, International Business Machines' first mass-produced electronic stored-program computer.

High-Speed Memory

The EDVAC and ACE proposals both advocated the use of mercury-filled tubes, called ‘delay lines’, for high-speed internal memory. This form of memory is known as acoustic memory. Delay lines had initially been developed for echo cancellation in radar; the idea of using them as memory devices originated with Eckert at the Moore School. Here is Turing's description:

It is proposed to build 'delay line' units consisting of mercury … tubes about 5′ long and 1″ in diameter in contact with a quartz crystal at each end. The velocity of sound in … mercury … is such that the delay will be 1.024 ms. The information to be stored may be considered to be a sequence of 1024 ‘digits’ (0 or 1) … These digits will be represented by a corresponding sequence of pulses. The digit 0 … will be represented by the absence of a pulse at the appropriate time, the digit 1 … by its presence. This series of pulses is impressed on the end of the line by one piezo-crystal, it is transmitted down the line in the form of supersonic waves, and is reconverted into a varying voltage by the crystal at the far end. This voltage is amplified sufficiently to give an output of the order of 10 volts peak to peak and is used to gate a standard pulse generated by the clock. This pulse may be again fed into the line by means of the transmitting crystal, or we may feed in some altogether different signal. We also have the possibility of leading the gated pulse to some other part of the calculator, if we have need of that information at the time. Making use of the information does not of course preclude keeping it also. (Turing [1945], p. 375)

Mercury delay line memory was used in EDSAC, BINAC, SEAC, Pilot Model ACE, EDVAC, DEUCE, and the full-scale ACE (1958). The chief advantage of the delay line as a memory medium was, as Turing put it, that delay lines were 'already a going concern' (Turing [1947], p. 380). The fundamental disadvantages of the delay line were that random access is impossible and, moreover, the time taken for an instruction, or number, to emerge from a delay line depends on where in the line it happens to be.

In order to minimize waiting-time, Turing arranged for instructions to be stored not in consecutive positions in the delay line, but in relative positions selected by the programmer in such a way that each instruction would emerge at exactly the time it was required, in so far as this was possible. Each instruction contained a specification of the location of the next. This system subsequently became known as ‘optimum coding’. It was an integral feature of every version of the ACE design. Optimum coding made for difficult and untidy programming, but the advantage in terms of speed was considerable. Thanks to optimum coding, the Pilot Model ACE was able to do a floating point multiplication in 3 milliseconds (Wilkes's EDSAC required 4.5 milliseconds to perform a single fixed point multiplication).
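The waiting-time argument can be made concrete with a toy model. In the sketch below (my illustration; the line length and the instruction timing are invented, not ACE's actual parameters), a word in a given slot of a recirculating line of N words can only be read when it emerges, so the placement of the next instruction determines how long the machine idles:

```python
# Toy model of delay-line latency. A line recirculates N words; the word
# in `slot` is readable only at pulse times t where (slot - t) % N == 0.
N = 32                         # words per line (invented; real lines varied)

def wait_for(slot, t):
    """Pulse times until the word in `slot` next emerges, starting at time t."""
    return (slot - t) % N

# Consecutive placement: if the current order finishes at t = 4, the order
# in the next slot (1) passed the read point long ago, so the machine must
# wait for it to come round again.
print(wait_for(1, t=4))        # -> 29 pulse times idle

# 'Optimum coding': place the next order in the slot that emerges exactly
# when the current order finishes.
print(wait_for(4, t=4))        # -> 0, no idle time
```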

In the Williams tube or electrostatic memory, previously mentioned, a two-dimensional rectangular array of binary digits was stored on the face of a commercially-available cathode ray tube. Access to data was immediate. Williams tube memories were employed in the Manchester series of machines, SWAC, the IAS computer, and the IBM 701, and a modified form of Williams tube in Whirlwind I (until replacement by magnetic core in 1953).

Drum memories, in which data was stored magnetically on the surface of a metal cylinder, were developed on both sides of the Atlantic. The initial idea appears to have been Eckert's. The drum provided reasonably large quantities of medium-speed memory and was used to supplement a high-speed acoustic or electrostatic memory. In 1949, the Manchester computer was successfully equipped with a drum memory; this was constructed by the Manchester engineers on the model of a drum developed by Andrew Booth at Birkbeck College, London.

The final major event in the early history of electronic computation was the development of magnetic core memory. Jay Forrester realised that the hysteresis properties of magnetic core (normally used in transformers) lent themselves to the implementation of a three-dimensional solid array of randomly accessible storage points. In 1949, at the Massachusetts Institute of Technology, he began to investigate this idea empirically. Forrester's early experiments with metallic core soon led him to develop the superior ferrite core memory. MIT's Digital Computer Laboratory undertook to build a computer similar to the Whirlwind I as a test vehicle for a ferrite core memory. The Memory Test Computer was completed in 1953. (This computer was used in 1954 for the first simulations of neural networks, by Belmont Farley and Wesley Clark of MIT's Lincoln Laboratory (see Copeland and Proudfoot [1996]).)

Once the absolute reliability, relative cheapness, high capacity and permanent life of ferrite core memory became apparent, core soon replaced other forms of high-speed memory. The IBM 704 and 705 computers (announced in May and October 1954, respectively) brought core memory into wide use.

Bibliography

Works Cited

  • Babbage, C. (ed. by Campbell-Kelly, M.), 1994, Passages from the Life of a Philosopher, New Brunswick: Rutgers University Press
  • Bennett, S., 1976, ‘F.C. Williams: his contribution to the development of automatic control’, National Archive for the History of Computing, University of Manchester, England. (This is a typescript based on interviews with Williams in 1976.)
  • Bowker, G., and Giordano, R., 1993, ‘Interview with Tom Kilburn’, Annals of the History of Computing, 15: 17–32
  • Copeland, B.J. (ed.), 2004, The Essential Turing, Oxford: Oxford University Press
  • Copeland, B.J. (ed.), 2005, Alan Turing's Automatic Computing Engine: The Master Codebreaker's Struggle to Build the Modern Computer, Oxford: Oxford University Press
  • Copeland, B.J. and others, 2006, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press
  • Copeland, B.J., and Proudfoot, D., 1996, ‘On Alan Turing's Anticipation of Connectionism’, Synthese, 108: 361–377
  • Evans, C., 197?, interview with M.H.A. Newman in ‘The Pioneers of Computing: an Oral History of Computing’, London: Science Museum
  • Fifer, S., 1961, Analog Computation: Theory, Techniques, Applications, New York: McGraw-Hill
  • Ford, H., 1919, ‘Mechanical Movement’, Official Gazette of the United States Patent Office, October 7, 1919: 48
  • Goldstine, H., 1972, The Computer from Pascal to von Neumann, Princeton: Princeton University Press
  • Huskey, H.D., 1947, ‘The State of the Art in Electronic Digital Computing in Britain and the United States’, in Copeland 2005
  • Newman, M.H.A., 1948, ‘General Principles of the Design of All-Purpose Computing Machines’, Proceedings of the Royal Society of London, series A, 195 (1948): 271–274
  • Randell, B., 1972, ‘On Alan Turing and the Origins of Digital Computers’, in Meltzer, B., Michie, D. (eds), Machine Intelligence 7, Edinburgh: Edinburgh University Press, 1972
  • Smith, B.C., 1991, ‘The Owl and the Electric Encyclopaedia’, Artificial Intelligence, 47: 251–288
  • Thomson, J., 1876, ‘On an Integrating Machine Having a New Kinematic Principle’, Proceedings of the Royal Society of London, 24: 262–5
  • Turing, A.M., 1936, ‘On Computable Numbers, with an Application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, Series 2, 42 (1936–37): 230–265. Reprinted in The Essential Turing (Copeland [2004])
  • Turing, A.M., 1945, ‘Proposed Electronic Calculator’, in Alan Turing's Automatic Computing Engine (Copeland [2005])
  • Turing, A.M., 1947, ‘Lecture on the Automatic Computing Engine’, in The Essential Turing (Copeland [2004])
  • Turing, A.M., and Wilkinson, J.H., 1946–7, ‘The Turing-Wilkinson Lecture Series (1946–7)’, in Alan Turing's Automatic Computing Engine (Copeland [2005])
  • von Neumann, J., 1945, ‘First Draft of a Report on the EDVAC’, in Stern, N., From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers, Bedford, Mass.: Digital Press (1981), pp. 181–246
  • Williams, F.C., 1975, ‘Early Computers at Manchester University’, The Radio and Electronic Engineer, 45 (1975): 237–331
  • Wynn-Williams, C.E., 1932, ‘A Thyratron 'Scale of Two' Automatic Counter’, Proceedings of the Royal Society of London, series A, 136: 312–324

Further Reading

  • Copeland, B.J., 2004, ‘Colossus — Its Origins and Originators’, Annals of the History of Computing, 26: 38–45
  • Metropolis, N., Howlett, J., Rota, G.C. (eds), 1980, A History of Computing in the Twentieth Century, New York: Academic Press
  • Randell, B. (ed.), 1982, The Origins of Digital Computers: Selected Papers, Berlin: Springer-Verlag
  • Williams, M.R., 1997, A History of Computing Technology, Los Alamitos: IEEE Computer Society Press


Related Entries

computability and complexity | recursive functions | Turing, Alan | Turing machines

Copyright © 2006 by
B. Jack Copeland <jack.copeland@canterbury.ac.nz>

Theories of Technology

Theories of technology attempt to explain the factors that shape technological innovation as well as the impact of technology on society and culture. Most contemporary theories of technology reject two previous views: the linear model of technological innovation and technological determinism. To challenge the linear model, today's theories of technology point to the historical evidence that technological innovation often gives rise to new scientific fields, and they emphasize the important role that social networks and cultural values play in shaping technological artifacts. To challenge technological determinism, today's theories of technology emphasize the scope of technical choice, which is greater than most laypeople realize; as science and technology scholars like to say, 'It could have been different.' For this reason, theorists who take these positions typically argue for greater public involvement in technological decision-making.

Social theories[edit]

'Social' theories focus on how humans and technology affect each other. Some focus on how decisions involving humans and technology are made: some hold that humans and technology carry equal weight in a decision, others that humans drive technology, and others the reverse. Most of the theories described here examine an individual human's interactions with technology, though a sub-group addresses groups of people interacting with technology. The theories are deliberately broad and open-ended, since the circumstances to which they apply change as human culture and technological innovation change.

[Figure: Society and Technology. Who dominates whom?]

Descriptive approaches

  • Social construction of technology (SCOT) – argues that technology does not determine human action, but that human action shapes technology. Key concepts include:
    • interpretive flexibility: 'Technological artifacts are culturally constructed and interpreted ... By this, we mean not only is there flexibility in how people think of or interpret artifacts but also there is flexibility in how artifacts are designed.' These technological artifacts[1] also determine and shape what a specific technological tool will symbolize and represent in a society or culture; this bears on SCOT because it shows that humans give technology its meaning by shaping it.
    • Relevant social group: shares a particular set of meanings about an artifact
    • 'Closure' and stabilization: when the relevant social group has reached a consensus
    • Wider context: 'the sociocultural and political situation of a social group shapes its norms and values, which in turn influence the meaning given to an artifact'
Key authors include MacKenzie and Wajcman (1985).
  • Actor-network theory (ANT) – posits a heterogeneous network of humans and non-humans as equal interrelated actors. It strives for impartiality in the description of human and nonhuman actors and the reintegration of the natural and social worlds. For example, Latour (1992)[2] argues that instead of worrying whether we are anthropomorphizing technology, we should embrace it as inherently anthropomorphic: technology is made by humans, substitutes for the actions of humans, and shapes human action. What is important is the chain and gradients of actors' actions and competencies, and the degree to which we choose to have figurative representations. Key concepts include the inscription of beliefs, practices, relations into technology, which is then said to embody them. Key authors include Latour (1997)[3] and Callon (1999).[4]
  • Structuration theory – defines structures as rules and resources organized as properties of social systems. The theory employs a recursive notion of actions constrained and enabled by structures which are produced and reproduced by that action. Consequently, this theory does not render technology as an artifact; instead, it examines how people, as they interact with technology in their ongoing practices, enact structures which shape their emergent and situated use of that technology. Key authors include DeSanctis and Poole (1990),[5] and Orlikowski (1992).[6]
  • Systems theory – considers the historical development of technology and media with an emphasis on inertia and heterogeneity, stressing the connections between the artifact being built and the social, economic, political and cultural factors surrounding it. Key concepts include reverse salients (when elements of a system lag in development with respect to others), differentiation, operational closure, and autopoietic autonomy. Key authors include Thomas P. Hughes (1992) and Luhmann (2000).[7]
  • Activity theory - considers an entire work/activity system (including teams, organizations, etc.) beyond just one actor or user. It accounts for the environment, history of the person, culture, role of the artifact, motivations, and complexity of real-life activity. One of the strengths of AT is that it bridges the gap between the individual subject and the social reality—it studies both through the mediating activity. The unit of analysis in AT is the concept of object-oriented, collective and culturally mediated human activity, or activity system.

Critical approaches

Critical theory goes beyond a descriptive account of how things are, to examine why they have come to be that way, and how they might otherwise be. Critical theory asks whose interests are being served by the status quo and assesses the potential of future alternatives to better serve social justice. According to Geuss's[8] definition, 'a critical theory, then, is a reflective theory which gives agents a kind of knowledge inherently productive of enlightenment and emancipation' (1981). Marcuse argued that whilst matters of technology design are often presented as neutral technical choices, in fact, they manifest political or moral values. Critical theory is a form of archaeology that attempts to get beneath common-sense understandings in order to reveal the power relationships and interests determining particular technological configurations and uses.

Perhaps the most developed contemporary critical theory of technology is contained in the works of Andrew Feenberg including 'Transforming Technology' (2002).

  • Values in Design – asks how we can ensure a place for values (alongside technical standards such as speed, efficiency, and reliability) as criteria by which we judge the quality and acceptability of information systems and new media, and how values such as privacy, autonomy, democracy, and social justice can become integral to conception, design, and development, not merely retrofitted after completion. Key thinkers include Helen Nissenbaum (2001).[9]

Group Theories

There are also a number of technology related theories that address how (media) technology affects group processes. Broadly, these theories are concerned with the social effects of communication media. Some (e.g., media richness) are concerned with questions of media choice (i.e., when to use what medium effectively). Other theories (social presence, SIDE, media naturalness) are concerned with the consequences of those media choices (i.e., what are the social effects of using particular communication media).

  • Social presence theory (Short, et al., 1976[10]) is a seminal theory of the social effects of communication technology. Its main concern is with telephony and telephone conferencing (the research was sponsored by the British Post Office, now British Telecom). It argues that the social impact of a communication medium depends on the social presence it allows communicators to have. Social presence is defined as a property of the medium itself: the degree of acoustic, visual, and physical contact that it allows. The theory assumes that more contact will increase the key components of 'presence': greater intimacy, immediacy, warmth and inter-personal rapport. As a consequence of social presence, social influence is expected to increase. In the case of communication technology, the assumption is that more text-based forms of interaction (e-mail, instant messaging) are less social, and therefore less conducive to social influence.
  • Media richness theory (Daft & Lengel, 1986)[11] shares some characteristics with social presence theory. It posits that the amount of information communicated differs with respect to a medium's richness. The theory assumes that resolving ambiguity and reducing uncertainty are the main goals of communication. Because communication media differ in the rate of understanding they can achieve in a specific time (with 'rich' media carrying more information), they are not all capable of resolving uncertainty and ambiguity well. The more restricted the medium's capacity, the less uncertainty and equivocality it is able to manage. It follows that the richness of the media should be matched to the task so as to prevent oversimplification or overcomplication.
  • Media naturalness theory (Kock, 2001; 2004)[12][13] builds on human evolution ideas and has been proposed as an alternative to media richness theory. Media naturalness theory argues that since our Stone Age hominid ancestors communicated primarily face-to-face, evolutionary pressures have led to the development of a brain that is consequently designed for that form of communication. Other forms of communication are too recent and unlikely to have posed evolutionary pressures that could have shaped our brain in their direction. Using communication media that suppress key elements found in face-to-face communication, as many electronic communication media do, thus ends up posing cognitive obstacles to communication. This is particularly the case in the context of complex tasks (e.g., business process redesign, new product development, online learning), because such tasks seem to require more intense communication over extended periods of time than simple tasks.
  • Media synchronicity theory (MST, Dennis & Valacich, 1999) redirects richness theory towards the synchronicity of the communication.
  • The social identity model of deindividuation effects (SIDE) (Postmes, Spears and Lea 1999;[14] Reicher, Spears and Postmes, 1995;[15] Spears & Lea, 1994 [16]) was developed as a response to the idea that anonymity and reduced presence made communication technology socially impoverished (or 'deindividuated'). It provided an alternative explanation for these 'deindividuation effects' based on theories of social identity (e.g., Turner et al., 1987[17]). The SIDE model distinguishes cognitive and strategic effects of a communication technology. Cognitive effects occur when communication technologies make 'salient' particular aspects of personal or social identity. For example, certain technologies such as email may disguise characteristics of the sender that individually differentiate them (i.e., that convey aspects of their personal identity) and as a result more attention may be given to their social identity. The strategic effects are due to the possibilities, afforded by communication technology, to selectively communicate or enact particular aspects of identity, and disguise others. SIDE therefore sees the social and the technological as mutually determining, and the behavior associated with particular communication forms as the product or interaction of the two.
  • Time, interaction, and performance (TIP; McGrath, 1991)[18] theory describes work groups as time-based, multi-modal, and multi-functional social systems. Groups interact in one of the modes of inception, problem solving, conflict resolution, and execution. The three functions of a group are production (towards a goal), support (affective) and well-being (norms and roles).

Other Stances

Additionally, many authors have framed technology in particular ways so as to critique or emphasize aspects of technology as addressed by the mainline theories. For example, Steve Woolgar (1991)[19] considers technology as text in order to critique the sociology of scientific knowledge as applied to technology and to distinguish between three responses to that notion: the instrumental response (interpretive flexibility), the interpretivist response (environmental/organizational influences), and the reflexive response (a double hermeneutic). Pfaffenberger (1992)[20] treats technology as drama to argue that a recursive structuring of technological artifacts and their social structure discursively regulates the technological construction of political power. A technological drama is a discourse of technological 'statements' and 'counterstatements' within the processes of technological regularization, adjustment, and reconstitution.

An important philosophical approach to technology has been taken by Bernard Stiegler,[21] whose work has been influenced by other philosophers and historians of technology including Gilbert Simondon and André Leroi-Gourhan. In Schumpeterian and Neo-Schumpeterian theories, technologies are critical factors of economic growth (Carlota Perez).[22]

Analytic theories

Finally, there are theories of technology that are not defined or claimed by a particular proponent, but are used by authors to characterize the existing literature, in contrast with their own work or as a review of the field.

For example, Markus and Robey (1988)[23] propose a general technology theory consisting of the causal structures of agency (technological, organizational, imperative, emergent), its structure (variance, process), and the level (micro, macro) of analysis.

Orlikowski (1992)[24] notes that previous conceptualizations of technology typically differ over scope (is technology more than hardware?) and role (is it an external objective force, the interpreted human action, or an impact moderated by humans?) and identifies three models:

  1. Technological imperative: focuses on organizational characteristics which can be measured and permits some level of contingency
  2. Strategic choice: focuses on how technology is influenced by the context and strategies of decision-makers and users
  3. Technology as a trigger of structural change: views technology as a social object

DeSanctis and Poole (1994) similarly write of three views of technology's effects:

  1. Decision-making: the view of engineers associated with positivist, rational, systems rationalization, and deterministic approaches
  2. Institutional school: technology is an opportunity for change, focuses on social evolution, social construction of meaning, interaction and historical processes, interpretive flexibility, and an interplay between technology and power
  3. An integrated perspective (social technology): soft-line determinism, with joint social and technological optimization, structural symbolic interaction theory

Bimber (1998)[25] addresses the determinacy of technology's effects by distinguishing three positions:

  1. Normative: an autonomous approach in which technology is an important influence on history only where societies attach cultural and political meaning to it (e.g., the industrialization of society)
  2. Nomological: a naturalistic approach in which an inevitable technological order arises based on laws of nature (e.g., the steam mill had to follow the hand mill)
  3. Unintended consequences: a fuzzy approach holding that technology's effects are contingent (e.g., a car is faster than a horse, but unbeknownst to its original creators it became a significant source of pollution)

References

Citations

  1. Shields, Mark A. (2012). 'Technology and Social Theory (review)'. Technology and Culture. 53 (4): 918–920. doi:10.1353/tech.2012.0130. ISSN 1097-3729. S2CID 108711621.
  2. Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In Bijker, W., and Law, J., editors, Shaping Technology/Building Society. MIT Press, Cambridge, MA.
  3. Latour, B. (1997). On Actor Network Theory: a few clarifications.
  4. Callon, M. (1999). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of Saint Brieuc Bay. In Biagioli, M., editor, The Science Studies Reader, pages 67–83. Routledge, New York.
  5. DeSanctis, G. and Poole, M. S. (1994). Capturing the complexity in advanced technology use: adaptive structuration theory. Organization Science, 5(2):121–147.
  6. Orlikowski, W.J. (1992). The duality of technology: rethinking the concept of technology in organizations. Organization Science, 3(3):398–427.
  7. Luhmann, N. (2000). The Reality of the Mass Media. Stanford University Press, Stanford, CA.
  8. Geuss, R. (1981). The Idea of a Critical Theory. Cambridge University Press, Cambridge.
  9. Nissenbaum, H. (2001). How computer systems embody values. Computer, 34(3):118–120.
  10. Short, J. A., Williams, E., and Christie, B. (1976). The Social Psychology of Telecommunications. John Wiley & Sons, New York.
  11. Daft, R. L. and Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5):554–571.
  12. Kock, N. (2001). The ape that used email: Understanding e-communication behavior through evolution theory. Communications of the Association for Information Systems, 5(3), 1–29.
  13. Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organization Science, 15(3), 327–348.
  14. Postmes, T., Spears, R., and Lea, M. (1999). Social identity, group norms, and deindividuation: Lessons from computer-mediated communication for social influence in the group. In Ellemers, N., Spears, R., and Doosje, B., editors, Social Identity: Context, Commitment, Content. Blackwell, Oxford.
  15. Reicher, S., Spears, R., & Postmes, T. (1995). A social identity model of deindividuation phenomena. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (Vol. 6, pp. 161–198). Chichester: Wiley.
  16. Spears, R., & Lea, M. (1994). Panacea or panopticon? The hidden power in computer-mediated communication. Communication Research, 21, 427–459.
  17. Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S., & Wetherell, M. S. (1987). Rediscovering the Social Group: A Self-Categorization Theory. Oxford, England: Basil Blackwell.
  18. McGrath, J.E. (1991). Time, interaction, and performance (TIP): A theory of groups. Small Group Research, 22(2):147–174.
  19. Woolgar, S. (1991). The turn to technology in social studies of science. Science, Technology, & Human Values, 16(1):20–50.
  20. Pfaffenberger, B. (1992). Technological dramas. Science, Technology, & Human Values, 17(3):282–312.
  21. Stiegler, B. (1998). Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford University Press.
  22. Perez, Carlota (2009). Technological revolutions and techno-economic paradigms. Working Papers in Technology Governance and Economic Dynamics, Working Paper No. 20. Tallinn University of Technology, Tallinn.
  23. Markus, M. and Robey, D. (1988). Information technology and organizational change: causal structure in theory and research. Management Science, 34:583–598.
  24. Orlikowski, W.J. (1992). The duality of technology: rethinking the concept of technology in organizations. Organization Science, 3(3):398–427.
  25. Bimber, B. (1998). Three faces of technological determinism. In Smith, M. and Marx, L., editors, Does Technology Drive History? The Dilemma of Technological Determinism, pages 79–100. MIT Press, Cambridge, MA.

Sources

  • Dennis, A. and Valacich, J. (1999). Rethinking media richness: towards a theory of media synchronicity. Proceedings of the 32nd Hawaii International Conference on Systems Science.
  • Desanctis, G. and Poole, M. S. (1990). Understanding the use of group decision support systems: the theory of adaptive structuration. In J. Fulk, C. S., editor, Organizations and Communication Technology, pages 173–193. Sage, Newbury Park, CA.
  • MacKenzie, D. and Wajcman, J. (1985). The Social Shaping of Technology. Milton Keynes: Open University Press.
  • Pinch, T. and Bijker, W. (1992). The social construction of facts and artifacts: or how the sociology of science and the sociology of technology might benefit each other. In Bijker, W. and Law, J., editors, Shaping Technology/Building Society, pages 17–50. MIT Press, Cambridge, MA.


Retrieved from 'https://en.wikipedia.org/w/index.php?title=Theories_of_technology&oldid=1006194381'



